A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce BEiT-3, a general-purpose multimodal foundation model that achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
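The abstract does not spell out the Multiway Transformer; as a rough, hedged sketch, the usual formulation pairs a shared self-attention layer with modality-specific feed-forward "experts". The module and parameter names below are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiwayBlock(nn.Module):
    """One Transformer block with shared self-attention and modality-specific
    feed-forward 'experts' (vision / language / fusion). Illustrative sketch only."""

    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # One FFN expert per modality, selected by the type of the input tokens.
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(dim, dim * mlp_ratio),
                nn.GELU(),
                nn.Linear(dim * mlp_ratio, dim),
            )
            for name in ("vision", "language", "fusion")
        })

    def forward(self, x, modality="fusion"):
        # Shared multi-head self-attention over all tokens.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Modality-specific feed-forward expert.
        return x + self.experts[modality](self.norm2(x))

# Image patches, text tokens, or concatenated pairs share the same attention
# weights but are routed to different experts.
block = MultiwayBlock()
image_tokens = torch.randn(2, 196, 768)
print(block(image_tokens, modality="vision").shape)  # torch.Size([2, 196, 768])
```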
Vision-Language Transformers can be learned without human labels (e.g., class labels, bounding boxes). Existing work, whether explicitly utilizing bounding boxes or patches, assumes that the visual backbone must first be trained on ImageNet class prediction before being integrated into a multimodal linguistic pipeline. We show that this is not necessary and introduce a new model, Vision-Language from Captions (VLC), built on top of Masked Auto-Encoders that does not require this supervision. In fact, in a head-to-head comparison between ViLT, the current state-of-the-art patch-based vision-language transformer which is pretrained with supervised object classification, and our model, VLC, we find that our approach (1) outperforms ViLT on standard benchmarks, (2) provides more interpretable and intuitive patch visualizations, and (3) is competitive with many larger models that utilize ROIs trained on annotated bounding boxes.
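For orientation, the patch-based setup the abstract refers to feeds linearly projected raw image patches, rather than features from a supervised ImageNet backbone, into a single transformer together with text tokens. The sketch below is a hedged illustration of that idea; the class, vocabulary size, and layer counts are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PatchTextEncoder(nn.Module):
    """Joint encoder over linearly projected image patches and text tokens.
    No supervised CNN/ViT backbone is assumed; illustrative sketch only."""

    def __init__(self, vocab_size=30522, dim=768, patch_dim=3 * 16 * 16,
                 depth=12, heads=12):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, dim)   # raw pixels -> embeddings
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.type_emb = nn.Embedding(2, dim)          # 0 = text, 1 = image
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, patches, token_ids):
        img = self.patch_proj(patches) + self.type_emb.weight[1]
        txt = self.token_emb(token_ids) + self.type_emb.weight[0]
        # Text and image tokens are processed jointly by one encoder.
        return self.encoder(torch.cat([txt, img], dim=1))

model = PatchTextEncoder()
patches = torch.randn(2, 196, 3 * 16 * 16)     # 14x14 grid of 16x16 RGB patches
token_ids = torch.randint(0, 30522, (2, 32))   # a short caption
print(model(patches, token_ids).shape)          # torch.Size([2, 228, 768])
```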
Mixup is a popular data augmentation technique based on creating new samples by linear interpolation between two given data samples, to improve both the generalization and robustness of the trained model. Knowledge distillation (KD), on the other hand, is widely used for model compression and transfer learning, and involves using a larger network's implicit knowledge to guide the learning of a smaller network. At first glance, these two techniques seem very different; however, we find that "smoothness" is the connecting link between the two and is also a crucial attribute in understanding KD's interplay with mixup. Although many mixup variants and distillation methods have been proposed, much remains to be understood regarding the role of mixup in knowledge distillation. In this paper, we present a detailed empirical study on various important dimensions of compatibility between mixup and knowledge distillation. We also scrutinize the behavior of networks trained with mixup in the light of knowledge distillation through extensive analysis, visualizations, and comprehensive experiments on image classification. Finally, based on our findings, we suggest improved strategies to guide the student network to enhance its effectiveness. Additionally, the findings of this study provide insightful suggestions to researchers and practitioners who commonly use techniques from KD. Our code is available at https://github.com/hchoi71/MIX-KD.
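As a concrete reference for how mixup and KD combine in a single training step, here is a minimal, hedged sketch: standard mixup of inputs and labels plus a temperature-scaled KL distillation term. The function name, hyper-parameters, and loss weighting are placeholders rather than the paper's settings (see the repository above for the actual recipe).

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_kd_step(student, teacher, x, y, alpha=0.2, T=4.0, kd_weight=0.9,
                  num_classes=10):
    """One training step combining mixup with knowledge distillation (sketch)."""
    # --- mixup: convex combination of two samples and their labels ---
    lam = Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]

    # --- forward passes: teacher sees the same mixed input ---
    s_logits = student(x_mix)
    with torch.no_grad():
        t_logits = teacher(x_mix)

    # --- soft cross-entropy on mixed labels + KL to the teacher's softened output ---
    ce = -(y_mix * F.log_softmax(s_logits, dim=1)).sum(dim=1).mean()
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - kd_weight) * ce + kd_weight * kd
```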
Deep neural networks are parametrized by thousands or millions of parameters and have shown tremendous success in many classification problems. However, the large number of parameters makes it difficult to integrate these models into edge devices such as smartphones and wearables. To address this problem, knowledge distillation (KD) has been widely adopted, in which a pre-trained, high-capacity network is used to train a much smaller network suitable for edge devices. This paper presents the first study of the applicability and challenges of using KD for time-series data from wearable devices. Successful application of KD requires specific choices of data augmentation methods during training; however, it is unclear whether a coherent strategy exists for selecting augmentation methods during KD. In this paper, we report the results of a detailed study that compares and contrasts various common choices and some hybrid data augmentation strategies in KD-based human activity analysis. Research in this area is often limited because comprehensive databases of wearable-device data are not available in the public domain. Our study considers databases ranging from publicly available ones to a database derived from a large-scale interventional study of human activity and sedentary behavior. We find that the choice of data augmentation technique during KD has a variable degree of impact on the final performance, and that the optimal network choice as well as the data augmentation strategy are specific to the dataset at hand. However, we also conclude with a set of general recommendations that provide strong baseline performance across the databases.
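To make the setup concrete, below is a hedged sketch of one distillation step on windowed sensor data, parameterized by the augmentation strategy under comparison. The two augmentations shown (jittering and magnitude scaling) are typical choices for wearable time series; the function names and parameters are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

# Common augmentations for windowed wearable-sensor data, shape (batch, channels, time).
def jitter(x, sigma=0.05):
    """Additive Gaussian noise."""
    return x + sigma * torch.randn_like(x)

def scaling(x, sigma=0.1):
    """Random per-channel magnitude scaling."""
    factor = 1.0 + sigma * torch.randn(x.size(0), x.size(1), 1, device=x.device)
    return x * factor

def distill_with_augmentation(student, teacher, x, y, augment, T=4.0, w=0.7):
    """One KD step for activity recognition; `augment` is the strategy being compared."""
    x_aug = augment(x)
    s_logits = student(x_aug)
    with torch.no_grad():
        t_logits = teacher(x_aug)            # large pre-trained teacher
    ce = F.cross_entropy(s_logits, y)        # hard activity labels
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - w) * ce + w * kd
```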
Deep neural networks are increasingly used as auxiliary tools in healthcare applications, owing to their ability to improve performance on several diagnostic tasks. However, these methods are not widely adopted in clinical settings because of practical limitations concerning the reliability, generalizability, and interpretability of deep-learning-based systems. As a result, methods have been developed that impose additional constraints during network training to gain more control and to improve interpretability, facilitating their acceptance in the medical community. In this work, we investigate the benefit of using Orthogonal Spheres (OS) constraints for classification of COVID-19 cases from chest X-ray images. The OS constraint can be written as a simple orthonormality term that is used in conjunction with the standard cross-entropy loss during training of the classification network. Previous studies have demonstrated significant benefits from applying such constraints to deep learning models. Our findings corroborate these observations, indicating that the orthonormality loss function effectively yields improved semantic localization via GradCAM visualizations, enhanced classification performance, and reduced model calibration error. Our approach achieves accuracy improvements of 1.6% and 4.8% for two- and three-class classification, respectively; similar results are found for models trained with data augmentation. Beyond these findings, our work presents a novel application of the OS regularizer in healthcare, improving the post-hoc interpretability and performance of deep learning models for COVID-19 classification to facilitate the adoption of these methods in clinical settings. We also identify limitations of our strategy that can be explored in future research.
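The abstract describes the OS constraint as a simple orthonormality term added to the standard cross-entropy loss; one common way to write such a penalty is a Frobenius-norm deviation of the feature Gram matrix from the identity, sketched below. The exact formulation in the paper may differ, and the choice of layer and weighting here are assumptions.

```python
import torch
import torch.nn.functional as F

def orthonormality_penalty(features, eps=1e-8):
    """Penalize deviation of the (L2-normalized) feature Gram matrix from identity.
    features: (batch, dim) activations from an intermediate layer.
    A common orthogonality term; the paper's exact OS formulation may differ."""
    f = F.normalize(features, dim=1, eps=eps)
    gram = f @ f.t()                                   # (batch, batch)
    identity = torch.eye(gram.size(0), device=gram.device)
    return ((gram - identity) ** 2).sum()

def classification_loss(logits, labels, features, os_weight=0.1):
    """Standard cross-entropy plus the orthonormality term; os_weight is illustrative."""
    return F.cross_entropy(logits, labels) + os_weight * orthonormality_penalty(features)
```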